# ImageNet Pre-training
All of the entries below are image-classification models pre-trained on ImageNet-1k and usable through the Transformers library.

| Model | Organization | License | Downloads | Likes | Description |
|---|---|---|---|---|---|
| EfficientNet B4 | google | Apache-2.0 | 5,528 | 1 | EfficientNet is a mobile-friendly, purely convolutional model that uniformly scales network depth, width, and input resolution; this checkpoint is trained on ImageNet-1k. |
| ViT Base Patch8 224 (DINO) | timm | Apache-2.0 | 9,287 | 1 | Vision Transformer (ViT) image feature model trained with the self-supervised DINO method, suitable for image classification and feature extraction. |
| NAT Mini IN1K 224 | shi-labs | MIT | 109 | 0 | NAT-Mini is a lightweight vision Transformer based on the Neighborhood Attention mechanism, trained for ImageNet image classification. |
| MobileViT Small | apple | Other | 894.23k | 65 | MobileViT is a lightweight, low-latency vision Transformer that combines the strengths of CNNs and Transformers, making it well suited to mobile devices. |
| RegNet Y 160 | facebook | Apache-2.0 | 18 | 0 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| RegNet Y 064 | facebook | Apache-2.0 | 17 | 0 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| RegNet Y 040 | facebook | Apache-2.0 | 2,083 | 1 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| RegNet Y 032 | facebook | Apache-2.0 | 21 | 0 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| RegNet Y 006 | facebook | Apache-2.0 | 18 | 0 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| RegNet Y 004 | facebook | Apache-2.0 | 17 | 0 | RegNet image-classification model trained on ImageNet-1k, with an efficient architecture designed through neural architecture search. |
| VAN Small | Visual-Attention-Network | Apache-2.0 | 15 | 1 | Visual Attention Network (VAN) model trained on ImageNet-1k that captures both local and long-range dependencies through convolution operations. |
| ResNet18 | glasses | Apache-2.0 | 15 | 0 | ResNet-18 is an image-classification model based on deep residual learning; residual connections ease the training of deep networks. |
| DeiT Base Distilled Patch16 224 | facebook | Apache-2.0 | 35.53k | 26 | Distilled version of the Data-efficient Image Transformer (DeiT), pre-trained and fine-tuned on ImageNet-1k at 224x224 resolution; it learns from a teacher model through knowledge distillation. |
| DeiT Small Patch16 224 | facebook | Apache-2.0 | 24.53k | 8 | DeiT is a Vision Transformer trained with improved data efficiency, pre-trained and fine-tuned on ImageNet-1k at 224x224 resolution, suitable for image classification. |
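Any of the checkpoints above can be tried through the Transformers image-classification pipeline. The sketch below assumes Hugging Face Hub repository IDs of the form `organization/model-name` (here `facebook/deit-base-distilled-patch16-224` as an example); substitute the ID of the model you want to use.

```python
# Minimal sketch: image classification with a pre-trained checkpoint from the table above.
# The repository ID and the image path are assumptions for illustration; swap in any other
# image-classification model from the listing (e.g. "google/efficientnet-b4").
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    "image-classification",
    model="facebook/deit-base-distilled-patch16-224",
)

image = Image.open("cat.jpg")  # any RGB image
for pred in classifier(image, top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```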
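The DINO checkpoint in the table is a timm model, so feature extraction is most naturally done with timm rather than a classification head. A minimal sketch, assuming a recent timm release and the timm model name `vit_base_patch8_224.dino`:

```python
# Minimal sketch: extracting image features with the self-supervised DINO ViT via timm.
# Assumes a recent timm release; the model name and image path are illustrative.
import timm
import torch
from PIL import Image

# num_classes=0 drops the classifier head, so the forward pass returns pooled features.
model = timm.create_model("vit_base_patch8_224.dino", pretrained=True, num_classes=0)
model.eval()

# Build the preprocessing transform that matches the checkpoint's training setup.
data_cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_cfg, is_training=False)

image = Image.open("cat.jpg").convert("RGB")
with torch.no_grad():
    features = model(transform(image).unsqueeze(0))  # shape: (1, 768) for ViT-B

print(features.shape)
```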